
    Monitoring squeezed collective modes of a one-dimensional Bose gas after an interaction quench using density ripples analysis

    We investigate the out-of-equilibrium dynamics following a sudden quench of the interaction strength in a one-dimensional quasi-condensate trapped at the surface of an atom chip. Within a linearized approximation, the system is described by independent collective modes, and the quench squeezes the phase-space distribution of each mode, leading to a subsequent breathing of each quadrature. We show that the collective modes are resolved by the power spectrum of the density ripples which appear after a short time of flight. This allows us to experimentally probe the expected breathing phenomenon. Our results are in good agreement with theoretical predictions that take the longitudinal harmonic confinement into account.
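
    As an illustrative single-mode picture (a generic sketch, not the paper's full multimode treatment): if a collective mode of frequency \(\omega_k\) is prepared in thermal equilibrium and the quench suddenly shifts its frequency to \(\omega_k'\), each quadrature variance subsequently breathes at twice the new frequency:

    ```latex
    % Single harmonic mode whose frequency is quenched \omega_k \to \omega_k'
    % (illustrative; symbols are generic and not the paper's notation)
    \langle x_k^2\rangle(t) \;=\; \langle x_k^2\rangle_0
      \left[\cos^2(\omega_k' t)
          + \left(\frac{\omega_k}{\omega_k'}\right)^{2}\sin^2(\omega_k' t)\right]
    ```

    The mismatch \(\omega_k/\omega_k'\) squeezes the initially isotropic phase-space distribution, and the resulting oscillation of the quadrature variances at \(2\omega_k'\) is the breathing probed through the density-ripple power spectrum.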

    Generalized HydroDynamics on an Atom Chip

    The emergence of a special type of fluid-like behavior at large scales in one-dimensional (1d) quantum integrable systems, theoretically predicted in 2016, is established experimentally by monitoring the time evolution of the in situ density profile of a single 1d cloud of ⁸⁷Rb atoms trapped on an atom chip after a quench of the longitudinal trapping potential. The theory can be viewed as a dynamical extension of the Yang-Yang thermodynamics and applies to the whole range of repulsion strengths and temperatures of the gas. The measurements, performed on weakly interacting atomic clouds lying at the crossover between the quasicondensate and ideal Bose gas regimes, are in very good agreement with the 2016 theory. This contrasts with the previously existing 'conventional' hydrodynamic approach, which relies on the assumption of local thermal equilibrium and is unable to reproduce the experimental data.
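
    The 2016 theory referred to here, generalized hydrodynamics (GHD), can be summarized at the Euler scale as a continuity equation for the quasiparticle (rapidity) distribution, advected at a state-dependent effective velocity. The sketch below uses standard GHD notation, which may differ from the paper's:

    ```latex
    % Euler-scale GHD: the quasiparticle density \rho_p(\theta,x,t) is
    % transported at the dressed effective velocity v^{\rm eff}
    \partial_t \rho_p(\theta,x,t)
      + \partial_x\!\left[v^{\rm eff}(\theta,x,t)\,\rho_p(\theta,x,t)\right] = 0,
    \qquad
    v^{\rm eff}(\theta) = \frac{(E')^{\rm dr}(\theta)}{(p')^{\rm dr}(\theta)}
    ```

    Here \(E(\theta)\) and \(p(\theta)\) are the bare quasiparticle energy and momentum, primes denote derivatives in \(\theta\), and "dr" denotes dressing by the interactions; the effective velocity thus depends on the local state itself.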

    Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Workers Through Explainable Artificial Intelligence

    While recent advances in AI-based automated decision-making have shown many benefits for businesses and society, they also come at a cost. It has long been known that a high level of automation of decisions can lead to various drawbacks, such as automation bias and deskilling. In particular, the deskilling of knowledge workers is a major issue, as they are the same people who should also train, challenge, and evolve AI. To address this issue, we conceptualize a new class of DSS, namely Intelligent Decision Assistance (IDA) based on a literature review of two different research streams---DSS and automation. IDA supports knowledge workers without influencing them through automated decision-making. Specifically, we propose to use techniques of Explainable AI (XAI) while withholding concrete AI recommendations. To test this conceptualization, we develop hypotheses on the impacts of IDA and provide the first evidence for their validity based on empirical studies in the literature

    Explainable AI for Constraint-Based Expert Systems

    The need to derive explanations from machine learning (ML)-based AI systems has been addressed in recent research due to the opaqueness of their processing. However, a significant number of production AI systems are not based on ML but are expert systems, which can be equally opaque. The resulting lack of understanding causes massive inefficiencies in business processes that involve opaque expert systems. This work draws on recent research interest in explainable AI (XAI) to generate knowledge for the design of explanations in constraint-based expert systems. Following the Design Science Research paradigm, we develop design requirements and design principles. Subsequently, we design an artifact and evaluate it in two experiments. We observe the following phenomena. First, global explanations in a textual format were well received. Second, abstract local explanations improved comprehensibility. Third, contrastive explanations successfully assisted in the resolution of contradictions. Finally, a local tree-based explanation was perceived as challenging to understand.
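
    A minimal, hypothetical sketch of the kind of contrastive explanation discussed here ("why does my configuration fail, and what change would fix it?"). The constraint names and the configuration domain are invented for illustration and are not from the paper's artifact:

    ```python
    # Hypothetical sketch: contrastive explanation for a constraint contradiction.
    # Constraints are predicates over a configuration dict; all names are illustrative.

    def find_conflicts(config, constraints):
        """Return the names of constraints the configuration violates."""
        return [name for name, pred in constraints.items() if not pred(config)]

    def contrastive_explanation(config, constraints, option, alternatives):
        """Explain a contradiction contrastively: report which alternative
        value for `option` would satisfy all constraints."""
        for alt in alternatives:
            trial = {**config, option: alt}
            if not find_conflicts(trial, constraints):
                return (f"'{option}={config[option]}' violates "
                        f"{find_conflicts(config, constraints)}; "
                        f"'{option}={alt}' satisfies all constraints.")
        return "No single-option change resolves the contradiction."

    # Toy domain: configuring a small computer under board and budget constraints.
    constraints = {
        "ram_fits_board": lambda c: not (c["board"] == "mini" and c["ram_gb"] > 16),
        "budget": lambda c: c["ram_gb"] * 5 + (100 if c["board"] == "mini" else 150) <= 300,
    }
    config = {"board": "mini", "ram_gb": 32}
    print(contrastive_explanation(config, constraints, "ram_gb", [8, 16, 64]))
    ```

    The point of the contrastive form is that the explanation names both the violated constraint and a concrete resolving change, which is what "assisted in the resolution of contradictions" refers to above.
    
    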

    A simplified design of a cEEGrid ear-electrode adapter for the OpenBCI biosensing platform

    We present a simplified design of an ear-centered sensing system built around the OpenBCI Cyton & Daisy biosignal amplifiers and the flex-printed cEEGrid ear-EEG electrodes. This design reduces the number of components that need to be sourced, reduces mechanical artefacts in the recorded data through better cable placement, and simplifies the assembly. Besides describing how to replicate and use the system, we highlight promising application scenarios, particularly the observation of large-amplitude activity patterns (e.g., facial muscle activities) and frequency-band neural activity (e.g., alpha and beta band power modulations for mental workload detection). Further, examples of common measurement artefacts and methods for removing them are provided, introducing a prototypical application of adaptive filters to this system. Lastly, as a promising use case, we present findings from a single-user study that highlights the system's capability of detecting jaw-clenching events robustly when contrasted with 26 other facial activities. Thereby, the system could, for instance, be used to devise applications that reduce pathological jaw clenching and teeth grinding (bruxism). These findings underline that the system represents a valuable prototyping platform for advancing ear-based electrophysiological sensing systems and a low-cost alternative to current commercial options.
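
    The adaptive-filter idea mentioned above can be sketched with a basic least-mean-squares (LMS) filter: given a reference channel correlated with the artefact, the filter learns to predict the artefact and subtracts it. This is a generic textbook construction, not the exact filter used with the cEEGrid system, and all signal names are synthetic:

    ```python
    import numpy as np

    def lms_filter(reference, noisy, n_taps=8, mu=0.01):
        """Least-mean-squares adaptive filter: learns the component of `noisy`
        predictable from past `reference` samples and subtracts it.
        `mu` is the adaptation step size; the error signal IS the cleaned output."""
        w = np.zeros(n_taps)
        cleaned = np.zeros_like(noisy)
        for n in range(n_taps, len(noisy)):
            x = reference[n - n_taps:n][::-1]  # most recent reference samples
            y = w @ x                          # current estimate of the artefact
            e = noisy[n] - y                   # error = artefact-free output sample
            w += 2 * mu * e * x                # LMS weight update
            cleaned[n] = e
        return cleaned

    # Synthetic example: a sinusoidal artefact leaking into a low-level signal.
    rng = np.random.default_rng(0)
    t = np.arange(2000)
    artefact = np.sin(0.05 * t)                 # e.g., line-noise-like interference
    signal = 0.1 * rng.standard_normal(t.size)  # the activity we want to keep
    cleaned = lms_filter(artefact, signal + artefact)
    ```

    After convergence, the residual power of the cleaned signal approaches that of the underlying activity alone; the signal component, being uncorrelated with the reference, passes through unattenuated.
    
    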

    Artificial intelligence and machine learning

    Within the last decade, the application of "artificial intelligence" and "machine learning" has become popular across multiple disciplines, especially in information systems. The two terms are still used inconsistently in academia and industry—sometimes as synonyms, sometimes with different meanings. With this work, we try to clarify the relationship between these concepts. We review the relevant literature and develop a conceptual framework to specify the role of machine learning in building (artificial) intelligent agents. Additionally, we propose a consistent typology for AI-based information systems. We contribute to a deeper understanding of the nature of both concepts and to more terminological clarity and guidance—as a starting point for interdisciplinary discussions and future research.

    On the Influence of Explainable AI on Automation Bias

    Artificial intelligence (AI) is gaining momentum, and its importance for the future of work in many areas, such as medicine and banking, is continuously rising. However, insights into the effective collaboration of humans and AI are still rare. Typically, AI supports humans in decision-making by addressing human limitations. However, it may also evoke human biases, especially automation bias, an over-reliance on AI advice. We aim to shed light on the potential of explainable AI (XAI) to influence automation bias. In this pre-test, we derive a research model and describe our study design. Subsequently, we conduct an online experiment on hotel review classifications and discuss first results. We expect our research to contribute to the design and development of safe hybrid intelligence systems.